23 research outputs found

    Disability design and innovation in computing research in low resource settings

    80% of people with disabilities worldwide live in low-resource settings, rural areas, informal settlements, and multidimensional poverty. ICT4D leverages technological innovations to deliver programs for international development, but very few programs focus on, or involve, people with disabilities in low-resource settings. Moreover, most studies largely publish the results of the research with a focus on positive stories rather than on lessons and recommendations about the research process itself. In short, researchers rarely examine what was challenging in the process of collaboration. We present reflections from the field across four studies. Our contributions are: (1) an overview of past work in computing with a focus on disability in low-resource settings, and (2) lessons and recommendations from four collaborative projects in Uganda, Jordan, and Kenya over the last two years that are relevant for future HCI studies in low-resource settings with communities with disabilities. We do this through a lens of Disability Interaction and ICT4D.

    A Personalized Gesture Interaction System with User Identification Using Kinect

    No full text

    Cyborg werden: Möglichkeitshorizonte in feministischen Theorien und Science Fictions

    Cyborgs were originally a product of the techno-military imagination, aimed at overcoming the limitations of the human body. As cybernetic organisms, cyborgs are in fact neither human nor machine, and yet both at once. Precisely this makes them attractive for queer_feminist speculation, which critiques dualisms as the foundation of logics of domination. The author asks how cyborgs bring dualisms to the point of implosion, how notions of difference beyond dualisms can be developed with cyborgs, and how queer_feminist stories in theories and science fictions expand our horizons of possibility.

    User-Defined Gestures for Augmented Reality

    Recently there has been an increase in research toward using hand gestures for interaction in the field of Augmented Reality (AR). These works have primarily focused on researcher-designed gestures, while little is known about user preference and behavior for gestures in AR. In this paper, we present our guessability study for hand gestures in AR, in which 800 gestures were elicited for 40 selected tasks from 20 participants. Using the agreement found among gestures, a user-defined gesture set was created to guide designers toward consistent, user-centered gestures in AR. Wobbrock's surface taxonomy has been extended to cover dimensionalities in AR, and with it, characteristics of the collected gestures have been derived. Common motifs that arose from the empirical findings were applied to obtain a better understanding of users' thinking and behavior. This work aims to lead to consistent, user-centered gesture designs in AR.
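    The "agreement found among gestures" mentioned in this abstract is typically computed with Wobbrock-style agreement scores: for each task (referent), sum the squared proportions of identical gesture proposals, then average across referents. A minimal sketch of that computation (the dictionary, gesture labels, and function name below are illustrative assumptions, not taken from the paper):

    ```python
    from collections import Counter

    def agreement_rate(proposals_by_referent):
        """Average agreement across referents: for each referent, sum the
        squared fractions of participants who proposed the same gesture.
        1.0 means everyone agreed; values near 0 mean near-total disagreement."""
        scores = []
        for proposals in proposals_by_referent.values():
            n = len(proposals)
            counts = Counter(proposals)  # identical proposals grouped together
            scores.append(sum((c / n) ** 2 for c in counts.values()))
        return sum(scores) / len(scores)

    # Hypothetical elicitation data: 4 participants proposed gestures
    # for the referent "select" (3 chose "tap", 1 chose "grab").
    score = agreement_rate({"select": ["tap", "tap", "grab", "tap"]})
    # (3/4)^2 + (1/4)^2 = 0.625
    ```

    The gesture with the largest group of identical proposals per referent is then usually chosen for the user-defined set.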

    User-Defined Body Gestures for an Interactive Storytelling Scenario

    No full text
    To improve full-body interaction in an interactive storytelling scenario, we conducted a study to obtain a user-defined gesture set. 22 users performed 251 gestures while running through the story script with real interaction disabled, but with hints about which set of actions the application currently requested. We describe our interaction design process, starting with conducting the study, continuing with the analysis of the recorded data, including the creation of a gesture taxonomy and the selection of gesture candidates, and ending with the integration of the gestures into our application.

    Estimating the Perceived Difficulty of Pen Gestures

    Our empirical results show that users perceive the execution difficulty of single-stroke gestures consistently, and that execution difficulty is highly correlated with gesture production time. We use these results to design two simple rules for estimating execution difficulty: establishing the relative ranking of difficulty among multiple gestures, and classifying a single gesture into five levels of difficulty. We confirm that the CLC model does not accurately predict the magnitude of production time, and instead show that a reasonably accurate estimate can be calculated using only a few gesture execution samples from a few people. Using this estimated production time, our rules, on average, rank gesture difficulty with 90% accuracy and rate gesture difficulty with 75% accuracy. Designers can use our results to choose application gestures, and researchers can build on our analysis in other gesture domains and for modeling gesture performance.
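    The abstract's ranking rule rests on the correlation between production time and perceived difficulty: estimate each gesture's production time from a few samples, then order gestures by that estimate. A minimal sketch of that idea (the function name, sample data, and use of a simple mean are assumptions for illustration; the paper's actual estimation procedure may differ):

    ```python
    from statistics import mean

    def rank_by_estimated_difficulty(samples_by_gesture):
        """Order gestures from hardest to easiest, using mean production
        time over a few execution samples as a proxy for difficulty
        (longer production time -> perceived as harder)."""
        estimates = {g: mean(times) for g, times in samples_by_gesture.items()}
        return sorted(estimates, key=estimates.get, reverse=True)

    # Hypothetical timing samples (seconds) from a few people per gesture.
    ranking = rank_by_estimated_difficulty({
        "circle": [1.2, 1.1, 1.3],
        "zigzag": [2.0, 2.2, 1.9],
        "line":   [0.5, 0.6, 0.5],
    })
    # -> ["zigzag", "circle", "line"]
    ```

    A designer could use such a ranking to avoid assigning the hardest gestures to frequent commands.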